Note: This page's design, presentation and content have been created and enhanced using Claude (Anthropic's AI assistant) to improve visual quality and educational experience.
Week 4 • Sub-Lesson 2

🤝 Ubuntu & Relational Ethics

"I am because we are" — communitarian approaches to technology ethics, and African frameworks for Just AI

What We'll Cover

The previous session introduced four philosophical lenses for AI ethics — consequentialism, deontology, virtue ethics, and ubuntu. This session takes a deep dive into the fourth lens and the broader ecosystem of African-grounded approaches to AI governance.

Western philosophical traditions tend to start from the individual: individual rights, individual duties, individual character. Ubuntu and relational ethics start from a fundamentally different place — from relationships, community, and the understanding that personhood itself is constituted through social bonds. This is not merely of local relevance at a South African university. It offers insights that individualist frameworks systematically miss: about collective harm, relational responsibility, structural injustice, and the social fabric that technology can either strengthen or erode.

We will also examine the Research ICT Africa (RIA) Just AI Framework of Inquiry — a concrete, justice-oriented approach to AI governance developed explicitly from and for African contexts.

🌍 What Is Ubuntu?

Ubuntu is a philosophical tradition, not a slogan. Understanding what it actually entails — and what it does not — is essential before applying it to AI ethics.

Core Principles

"Umuntu ngumuntu ngabantu" — a person is a person through other people. Ubuntu holds that:

  • Personhood is relational: You become fully human through your relationships with others, not in isolation
  • Interdependence is fundamental: Individual flourishing depends on community flourishing — and vice versa
  • Communal responsibility: Ethical obligations extend to the community, not just to individuals directly affected by an action
  • Dignity through belonging: Every person has inherent worth, which is realised through meaningful participation in community life
  • Restorative orientation: When harm occurs, the goal is to restore relationships and reintegrate, not merely to punish

Ubuntu vs. Individualist Ethics

The contrast with dominant Western ethical traditions is not about one being "better" — it is about fundamentally different starting points that reveal different things:

  • Starting point: Individualist ethics begins from the autonomous individual and asks about rights and obligations. Ubuntu begins from relationships and asks how actions affect the web of social connections.
  • Unit of analysis: Western frameworks often assess harm to individuals. Ubuntu assesses harm to relationships and communities.
  • Concept of justice: Rights-based frameworks focus on fair procedures. Ubuntu asks whether the social fabric has been strengthened or weakened.
  • Accountability: Individualist ethics assigns responsibility to specific actors. Ubuntu understands responsibility as distributed and shared.

Neither framework sees everything. Each has blind spots that the other illuminates.

💡 A Living Tradition

Ubuntu is not a monolithic doctrine. It is a living philosophical tradition with many formulations across different African cultures and intellectual traditions — communitarian, relational-ontological, and deliberative-democratic readings carry substantially different implications. The treatment in this session necessarily simplifies. We encourage you to engage with the primary sources (Mhlambi, Birhane, Gwagwa et al.) to appreciate the depth and diversity of this philosophical tradition.

🔄 From Rationality to Relationality

Sabelo Mhlambi's influential paper argues that the philosophical foundations of Western AI ethics are inadequate — and that ubuntu offers a more robust alternative.

💡 The Central Argument

Western AI ethics frameworks are built on a Cartesian foundation: "I think, therefore I am." Personhood is defined through individual rational capacity. This creates a specific set of ethical priorities — autonomy, individual rights, rational consent — that are valuable but limited.

Mhlambi proposes a shift from "I think, therefore I am" to "I am because we are." If personhood is constituted through relationships rather than individual rationality, then the ethical questions about AI change fundamentally. The question is no longer just "does this AI respect individual autonomy?" but "does this AI strengthen or weaken the relationships and communities through which people become fully human?"

The Limits of Rights-Based Frameworks

Mhlambi identifies several things that individual rights frameworks struggle to address in the context of AI:

  • Collective harm: When AI systematically underrepresents a community in training data, the harm is to the community as a whole — not easily reducible to individual rights violations
  • Erosion of social trust: AI-generated misinformation damages the shared epistemic commons, but "social trust" is not an individual right that can be protected through conventional legal frameworks
  • Cultural erasure: AI systems trained primarily on English-language data from the Global North can homogenise knowledge and expression — a harm to cultural diversity that individual consent frameworks do not capture
  • Power asymmetries: When communities in the Global South are subjects of AI systems designed in the Global North, the power relationship is structural, not individual

What Relational Ethics Adds

An ubuntu-grounded approach to AI ethics asks different — and additional — questions:

  • Relational impact: How does this technology affect relationships between people? Between communities? Between knowledge traditions?
  • Community bonds: Does this AI use strengthen or weaken the communal bonds through which people flourish?
  • Inclusivity of consideration: Who is excluded from the community of those whose wellbeing matters? Whose voices are absent from the design and governance process?
  • Power dynamics: How does this technology redistribute power? Does it concentrate control or distribute agency?
  • Restorative potential: When AI causes harm, how can relationships be repaired — not just individual compensation paid?

📄 Key Reading

Mhlambi, S. (2020): "From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for AI Governance" — Carr Center Discussion Paper, Harvard Kennedy School. Free PDF. The foundational paper for understanding ubuntu as a framework for AI ethics and governance.

⚖️ Algorithmic Injustice: Birhane's Relational Approach

Abeba Birhane's work applies relational ethics directly to the question of algorithmic injustice — arguing that the problem is structural, not just technical.

The Argument

Algorithmic injustice cannot be fixed by debiasing individual systems. The conventional approach — identify bias in a dataset, remove it, retrain — treats injustice as a technical bug to be patched. Birhane argues this fundamentally misdiagnoses the problem.

Algorithms do not merely contain biases. They encode and amplify existing power relationships. A facial recognition system that performs poorly on dark-skinned faces is not just "biased" — it reflects and reinforces a world in which certain faces are valued more than others. The injustice is relational and structural, not just statistical.

A relational ethics approach asks not just "is this algorithm biased?" but "what relationships of power does this algorithm create, sustain, or disrupt?"
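
To see what the conventional approach captures, and what it leaves out, here is a minimal sketch of a standard statistical bias audit: disaggregating a classifier's error rate by group. The records are invented for illustration.

```python
# Minimal sketch of a conventional bias audit: per-group error rates.
# The records are invented illustration data, not a real system.
from collections import defaultdict

# (group, true_label, predicted_label) for a hypothetical matching task
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, pred in records:
    tallies[group][0] += int(truth != pred)
    tallies[group][1] += 1

for group, (wrong, total) in tallies.items():
    print(f"{group}: error rate {wrong / total:.0%}")
```

A disparity in this output is genuine evidence of statistical bias, but notice that nothing in the computation can answer the relational questions: who built the system, who is subjected to it, and who bears the cost of its failures.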

Implications for Researchers

When using AI in your own research, a relational lens prompts questions that technical frameworks do not:

  • Whose knowledge? Whose knowledge was used to train this model? Whose perspectives are represented — and whose are systematically excluded?
  • Who benefits? Who benefits from the outputs of this AI-assisted research? Are the communities whose data contributed to the training also benefiting from the results?
  • What is reproduced? What existing power relationships does this AI tool reproduce or amplify in my research context?
  • Whose categories? When AI applies categories to my data (thematic codes, classifications, groupings), whose conceptual framework is it applying? Is that framework appropriate for my research context?
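
The "whose categories" question can be made partly operational, as a small hypothetical sketch shows: when an AI tool proposes thematic codes for your data, diff them against your own codebook before adopting any, so that imported conceptual frameworks surface explicitly. All code names below are invented.

```python
# Hypothetical: surface where an AI tool's thematic codes come from
# by comparing them against the researcher's own codebook.
ai_codes = {"resilience", "empowerment", "social capital", "self-efficacy"}
codebook = {"ubuntu", "communal care", "resilience", "mutual aid"}

print("Shared codes:     ", sorted(ai_codes & codebook))
print("AI-only codes:    ", sorted(ai_codes - codebook))  # whose framework?
print("Missed by the AI: ", sorted(codebook - ai_codes))  # local concepts ignored
```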

📄 Key Reading

Birhane, A. (2021): "Algorithmic Injustice: A Relational Ethics Approach" — Patterns, 2(2). Open access. Argues that algorithmic injustice is relational and structural, and that individualist fixes (debiasing, fairness metrics) are insufficient without attending to the power relationships that algorithms encode.

🏛️ The RIA Just AI Framework of Inquiry

Moving from philosophical foundations to a concrete governance framework: Research ICT Africa's Framework of Inquiry translates African justice values into a structured tool for AI research and policy.

Why "Just" AI — Not Just "Responsible" AI

The RIA Framework (Chetty & Sey, 2025) makes a precise distinction. Widely circulated global frameworks — including the EU Ethics Guidelines for Trustworthy AI, the OECD AI Principles, and UNESCO's Recommendation on the Ethics of AI — advocate for qualities like accountability, fairness, transparency, and trustworthiness. These are necessary conditions, but the framework argues they are insufficient to guarantee just outcomes.

The example is sharp: an AI tool made available across all socioeconomic backgrounds may still produce unjust outcomes if a skills deficit prevents users in under-resourced contexts from leveraging it effectively. The tool is "fair" by conventional metrics, but the outcome is not just.
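
The arithmetic behind that example is worth making explicit. In the invented figures below, access is identical across groups, so the distribution looks "fair" by conventional metrics, yet realised benefit diverges because effective use depends on skills and infrastructure.

```python
# Hypothetical illustration: equal access, unequal outcome.
groups = {
    #                  access rate, effective-use rate (skills, connectivity, ...)
    "well_resourced":  (1.00, 0.90),
    "under_resourced": (1.00, 0.30),
}

for name, (access, effective_use) in groups.items():
    benefit = access * effective_use
    print(f"{name}: access {access:.0%}, realised benefit {benefit:.0%}")

# Access is identical, yet realised benefit differs threefold: the gap a
# restorative, redistributive notion of justice is designed to surface.
```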

Justice, in the RIA framework, is explicitly restorative and redistributive — not merely procedural.

Why an African Framework Matters Globally

Current AI ethics discourse is dominated by institutions in the Global North. This produces governance frameworks that encode specific epistemological assumptions, legal traditions, and economic interests that do not transfer straightforwardly to African contexts — or, arguably, to many other contexts beyond the societies in which they were developed.

  • Unique developmental challenges: Infrastructure deficits, diverse democratic systems, colonial legal legacies
  • Risk of consumer status: Without deliberate African-led governance, the continent risks being positioned as a consumer rather than a co-creator of AI technology
  • Epistemic justice: African knowledge systems, ethical frameworks, and lived experiences must be foundational to governance — not afterthoughts

💡 Four Structural Challenges

The framework identifies four overarching structural problems that AI governance must confront:

1. Inadequate governance for human dignity: Inherited colonial and global legal norms are insufficient for the complexities of AI. Data must be understood not as a neutral commodity but as a representation of human lived experience — its collection and use must be reciprocal and restorative, not extractive.

2. Western epistemological dominance: AI systems are designed and governed based on Western epistemologies, to the systematic exclusion of African knowledge systems. Generative AI is an acute threat here, as it risks eroding Africa's linguistic diversity and cultural expression through training data biases.

3. Extractive value flows: African data and digital labour generate immense value that flows outward to foreign technology companies with little reinvestment in local communities — a dynamic the framework terms "digital colonialism."

4. Environmental and socioeconomic costs: As Africa builds digital infrastructure, it risks inheriting and exacerbating AI's environmental debt — a theme we examined in detail in Week 3.

🔬 The Nine Core Inquiries

The framework structures its analysis around nine interconnected inquiries, each addressing a dimension of justice across the AI value chain.

1. Human Rights Primacy

Are AI systems developed and deployed in compliance with human rights frameworks? Are human rights impact assessments mandated at each stage of the AI lifecycle?

2. All of Humanity

Is the AI value chain globally inclusive? Do mechanisms exist to ensure that marginalised countries' voices and needs are incorporated into AI governance?

3. Structural Inequality

Do AI systems account for and challenge structural inequalities, rather than encoding and amplifying the historical biases embedded in training data?

4. Equity & Inclusion by Design

Are equity and inclusion built into every stage of the AI lifecycle — from problem formulation through deployment and monitoring — rather than retrofitted after the fact?

5. Ethics of Care

Are AI systems grounded in context and relationality? Do they prioritise human relationships and wellbeing — especially for less-powerful groups?

6. Democratic Governance

Is the AI value chain subject to democratic oversight and control? Are there mechanisms for meaningful public participation in AI governance?

7. Economic Justice

Are AI's benefits shared equitably? Are those negatively impacted compensated or supported? Does governance achieve restorative, redistributive justice?

8. Data Justice

Do AI systems respect personal and community data? Are both individual and collective data rights recognised? Is data used to empower rather than extract?

9. Sustainability

Does the AI value chain consider environmental and social impacts? Are assessments of environmental costs required — and acted upon?

💡 A Living Framework

The RIA Just AI Framework of Inquiry is currently in its alpha phase — it is being developed as a public good through open forums with policymakers, civil society organisations, and research networks across Africa. It is explicitly designed to be a living document, co-created with the communities it aims to serve. This is in deliberate contrast to governance frameworks developed externally and then applied to African contexts. Each inquiry is structured around three types of questions: research questions (for empirical analysis), policy analysis questions (for evaluating existing policies), and design questions (for system developers).
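
Because every inquiry carries the same three question types, one lightweight way to work with the framework is to keep a structured record per inquiry as you analyse your own project. The sketch below is our own organisational aid, not RIA tooling; the field names are invented, and it is a scaffold for reflection rather than a checklist.

```python
# Sketch: organising project notes against the framework's structure.
# Each inquiry pairs research, policy-analysis, and design questions.
# Field and class names are our own invention, not RIA terminology.
from dataclasses import dataclass, field

@dataclass
class InquiryNotes:
    name: str
    research_questions: list[str] = field(default_factory=list)  # empirical analysis
    policy_questions: list[str] = field(default_factory=list)    # evaluating existing policy
    design_questions: list[str] = field(default_factory=list)    # for system developers

notes = [
    InquiryNotes(
        name="Data Justice",
        research_questions=["Whose data trained the models used in this study?"],
        policy_questions=["Does local law recognise collective data rights?"],
        design_questions=["Can contributing communities audit or withdraw their data?"],
    ),
    # ... one record per inquiry, filled in for your own project
]

for inquiry in notes:
    print(inquiry.name, "-", len(inquiry.research_questions), "research question(s) noted")
```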

📄 The RIA Just AI Framework

Chetty, P. & Sey, A. (2025): "RIA Just AI Framework of Inquiry" — Research ICT Africa. Published October 2025 under Creative Commons Attribution 4.0 International licence. A justice-oriented framework for guiding AI research and policy, ensuring that marginalised, vulnerable, and under-resourced communities are systematically addressed.

🔗 Ubuntu and AI: Specific Applications

How do ubuntu philosophy and justice-oriented frameworks change the questions we ask about specific AI uses in research?

📊 Reframing Ethical Questions

Each research scenario below is paired with the questions an individualist framework asks and the questions an ubuntu / Just AI framework adds:

Using AI to analyse interview data
  • Individualist framework asks: Did participants consent to AI processing? Is the data stored securely? Are individual identities protected?
  • Ubuntu / Just AI framework asks: Whose interpretive categories does the AI apply? Does it impose external frameworks on lived experience? How are participants' communities affected by the analysis?

Publishing AI-assisted writing
  • Individualist framework asks: Was AI use disclosed? Does the researcher take responsibility for the content? Is the use permitted by journal policy?
  • Ubuntu / Just AI framework asks: How does AI-assisted publication affect the researcher's relationship with their scholarly community? Does it undermine shared norms of intellectual contribution? Who is excluded from AI-assisted productivity gains?

Training AI on local/indigenous knowledge
  • Individualist framework asks: Was individual consent obtained? Are intellectual property rights respected? Is the data anonymised?
  • Ubuntu / Just AI framework asks: Does the community as a whole consent to this use of their knowledge? Who benefits from the AI system trained on this knowledge? Is the relationship between knowledge-holders and technology-builders reciprocal or extractive?

Deploying AI in a community context
  • Individualist framework asks: Does the system meet accuracy thresholds? Are individual users informed? Is there a complaints mechanism?
  • Ubuntu / Just AI framework asks: How does this technology affect community bonds and social organisation? Does it strengthen or weaken local decision-making capacity? Was the community involved in the design process?

Using AI for literature review in under-resourced languages
  • Individualist framework asks: Is the AI accurate? Are sources correctly cited? Is the review comprehensive?
  • Ubuntu / Just AI framework asks: Does the AI systematically underrepresent scholarship in African languages? Does reliance on AI-curated literature reinforce the dominance of English-language knowledge production? What perspectives are structurally excluded?

⚠️ Not a Checklist

Ubuntu and the RIA Just AI Framework are not checklists to apply mechanically. They represent a fundamentally different orientation toward ethical reasoning — one that starts from relationships, community, and structural justice rather than individual rights and procedural fairness. The table above illustrates the kinds of additional questions these frameworks generate. The point is to expand your ethical vision, not to replace one set of boxes to tick with another.

🌐 AI Ethics in the Global South

The AI ethics conversation has been dominated by institutions in a small number of countries. What does the landscape look like from Africa, South America, and Southeast Asia?

Power Asymmetries

AI systems are overwhelmingly designed in the Global North and deployed globally. This creates structural asymmetries that conventional "fairness" frameworks do not adequately address:

  • Training data: Massively underrepresents Global South languages, knowledge systems, and contexts
  • Benefit distribution: Economic benefits of AI accrue disproportionately to technology-producing regions
  • Governance voice: African and Global South countries have limited influence in global AI governance forums
  • Infrastructure dependency: Reliance on cloud computing infrastructure owned and operated by foreign companies
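
The first of these asymmetries, training-data representation, can be made concrete with a simple audit. A minimal sketch, assuming you have per-document language tags for a corpus; every count below is invented for illustration.

```python
# Hypothetical corpus audit: each language's share of documents.
# Counts are invented for illustration only.
corpus_counts = {"English": 9_200_000, "French": 450_000,
                 "isiZulu": 2_100, "isiXhosa": 1_200, "Yoruba": 900}

total = sum(corpus_counts.values())
for language, count in sorted(corpus_counts.items(), key=lambda kv: -kv[1]):
    print(f"{language:>10}: {count / total:8.4%} of corpus")

# isiXhosa has some 19 million speakers, yet in this invented corpus it
# supplies roughly 0.01% of documents: underrepresentation at a scale
# that no downstream "debiasing" step can compensate for.
```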

Local Knowledge and Data Sovereignty

Critical questions about knowledge and power:

  • Data ownership: Who owns data about African communities? Who profits from it?
  • Community consent: Should AI trained on local knowledge require community consent — not just individual consent?
  • Benefit sharing: If AI systems generate value from local data, should communities share in the economic benefits?
  • Epistemic justice: The RIA framework's insistence that African knowledge systems must be foundational — not afterthoughts — to AI governance is both a theoretical and a practical demand

From Local Realities to Global Forums

The RIA Just AI Framework is explicitly designed to bridge this gap — generating evidence robust enough to inform national policy while also carrying weight in international governance dialogues. The aim is to shift Africa's position from a relatively weak voice in global conversations to an active and dynamic site of knowledge production, resistance, and governance.

As researchers at a South African university, you have a particular vantage point on these questions — and a particular responsibility to engage with them.

🔬 Case Study: The Esethu Framework — Data Sovereignty in Practice

The questions raised above are not merely theoretical. The Esethu Framework (Rajab et al., 2025) offers a concrete example of how African researchers are operationalising data sovereignty and community governance in AI development.

"Esethu" is an isiXhosa word meaning "ours" or "belonging to us" — a name that captures the framework's central principle: language data should belong to and benefit the communities who create it.

The framework addresses a specific problem: isiXhosa, spoken by 19 million people, has remarkably little publicly available speech recognition data. Traditional open-data licences, while well-intentioned, tend to benefit better-resourced organisations — often outside Africa — who can most easily capitalise on freely available datasets. Meanwhile, restrictive proprietary licences lock out the very communities who contributed the data.

The Esethu Framework's solution is a dual-licence structure:

  • For non-commercial use, the data is released under a Creative Commons licence, keeping research open
  • Commercial use requires a licence fee — but this fee is waived for African entities (including diaspora organisations), and all revenue is legally mandated to be reinvested in producing further African-language datasets
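
The dual-licence decision itself is mechanical enough to sketch in code. The function below is our own illustration of the logic as just described; the predicate names, return strings, and overall shape are assumptions, not text from the Esethu licence.

```python
# Sketch of an Esethu-style dual-licence decision, per the description above.
# Names and terms are hypothetical illustrations, not the actual licence.

def licence_terms(commercial: bool, african_entity: bool) -> str:
    """Return the terms that apply to a would-be user of the dataset."""
    if not commercial:
        # Research and other non-commercial use stays open.
        return "Creative Commons (non-commercial, no fee)"
    if african_entity:
        # Fee waived for African entities, including diaspora organisations.
        return "Commercial licence, fee waived"
    # Fees collected here are mandated to fund further African-language datasets.
    return "Commercial licence, fee payable (reinvested in new datasets)"

for commercial, african in [(False, False), (True, True), (True, False)]:
    print(f"commercial={commercial}, african_entity={african}: "
          f"{licence_terms(commercial, african)}")
```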

This is ubuntu in practice: the framework ensures that communities remain the primary beneficiaries of their own data, that economic value flows back rather than being extracted, and that the relationship between data contributors and technology builders is reciprocal rather than exploitative. The framework's proof of concept — a 10-hour isiXhosa speech dataset created by native speakers — demonstrates that community-governed data curation can produce high-quality resources while protecting community interests.

📄 Key Readings

Rajab, J. et al. (2025): "The Esethu Framework: Reimagining Sustainable Dataset Governance and Curation for Low-Resource Languages" — A community-driven framework for language data governance that operationalises data sovereignty and equitable benefit-sharing for African languages.

Okolo, C.T. (2023): "AI in the Global South: Opportunities and Challenges Towards More Equitable Governance" — Brookings Institution. Examines the equity dimensions of AI access and governance from a Global South perspective.

📚 Summary & Key Takeaways

This session explored ubuntu and African-grounded frameworks as essential complements to Western ethical traditions for navigating AI in research.

  • Ubuntu starts from relationships: Personhood is constituted through social bonds — ethical reasoning must attend to the impact of actions on communities and relationships, not just individuals
  • From rationality to relationality: Mhlambi's framework challenges the Cartesian foundations of Western AI ethics and proposes ubuntu as an alternative foundation for governance
  • Algorithmic injustice is structural: Birhane argues that debiasing individual systems is insufficient — the power relationships that algorithms encode must be examined
  • Justice, not just fairness: The RIA Just AI Framework moves beyond "responsible AI" to demand restorative and redistributive justice, centring marginalised communities across nine interconnected inquiries
  • Global South perspectives matter: AI ethics frameworks developed in the Global North carry epistemic assumptions that may not translate — African and other Global South voices must be foundational to global AI governance. Concrete initiatives like the Esethu Framework show how data sovereignty and community governance can be operationalised in practice
  • Not a checklist: These frameworks expand the range of questions you ask, not the range of boxes you tick

Next session: We move from ethical theory to practical application — transparency and disclosure norms, the AI authorship debate, bias in AI-assisted research, data privacy, academic integrity, and intellectual property.